This model achieves an average F1 score of 0.806, AUC of 0.832, and recall of 0.925. We discuss the implications of the selected features, the temporal quantity of data, and modality.

Inertial measurement units (IMUs) have been used for gait analysis in many clinical studies as a more convenient, lower-cost, and less restrictive alternative to laboratory-based motion-capture systems and instrumented walkways. Spatial-temporal gait parameters calculated from the IMUs, such as gait cycle duration and stride length, are often used in these studies to evaluate impaired gait. However, the spatial-temporal information provided by IMUs is limited and can lead to incomplete, less effective evaluation. In this study, we develop a novel IMU-based method for clinical gait evaluation. Nine gait variables, comprising three spatial-temporal parameters and six kinematic parameters, are extracted from two shank-mounted IMUs to quantify a patient's gait deviations. From these parameters, an IMU-based gait normalcy index (INI) is derived to evaluate overall gait performance. Eight inpatients with gait impairments caused by n-hexane neuropathy and ten healthy subjects were recruited. The proposed gait variables and the INI were examined on the inpatients at three to five time points during rehabilitation, until discharge. Comparison with the healthy subjects and statistical analysis of the changes in the gait variables and the INI demonstrated that the proposed set of gait variables and the INI provide adequate and effective information for quantifying gait abnormalities, and help in understanding gait progress and the effectiveness of therapy during rehabilitation.

Model-based Bayesian frameworks have proved effective in the field of ECG processing. However, their performance relies heavily on pre-defined models extracted from ECG signals.
Furthermore, their performance degrades substantially when ECG signals do not comply with those models, a situation that commonly occurs in arrhythmia. In this paper, we propose a novel Bayesian framework based on the Kalman filter that does not need a predefined model and can adapt itself to different ECG morphologies. Compared with previous Bayesian techniques, the proposed method requires much less preprocessing; it only needs the locations of the R-peaks to start ECG processing. Our method uses a filter bank comprising two adaptive Kalman filters, one for denoising the QRS complex (high-frequency section) and another for denoising the P and T waves (low-frequency section). The parameters of these filters are estimated and iteratively updated with the expectation-maximization (EM) algorithm. To deal with nonstationary noise such as muscle artifact (MA) noise, we use Bryson and Henrikson's technique for the prediction and update steps inside the Kalman filter bank. We evaluated the proposed method on several ECG databases containing signals with morphological changes and abnormalities such as atrial premature complexes (APC), premature ventricular contractions (PVC), ventricular tachyarrhythmia (VT), and sudden cardiac death. The proposed algorithm was compared with several popular ECG denoising methods, including the wavelet transform (WD), the extended Kalman filter (EKF), and empirical mode decomposition (EMD). The comparison showed that the proposed method performs well across various ECG morphologies in both stationary and nonstationary environments, especially at low input SNRs.

The identification of retinal lesions plays a vital role in accurately classifying and grading retinopathy. Many studies on optical coherence tomography (OCT)-based retinal image analysis have been presented in the past.
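Setting aside the EM parameter estimation and the Bryson-Henrikson colored-noise handling, the predict/update recursion at the core of a Kalman-based ECG denoiser like the one described above can be sketched as follows. This is a minimal scalar sketch under simplifying assumptions: a random-walk state model and fixed, hand-picked noise variances `q` and `r`, not the adaptive filter bank from the paper.

```python
import numpy as np

def kalman_denoise(y, q=1e-4, r=0.1):
    """Minimal scalar Kalman filter with a random-walk state model:
    x_k = x_{k-1} + w_k,  y_k = x_k + v_k, where w ~ N(0, q) and
    v ~ N(0, r).  Only the predict/update recursion is shown; the
    EM-based parameter estimation and nonstationary-noise handling
    of the full method are not reproduced here."""
    x_hat = np.empty(len(y))
    x, p = y[0], 1.0              # initial state and error covariance
    for k in range(len(y)):
        p = p + q                 # predict: covariance grows by process noise
        K = p / (p + r)           # Kalman gain
        x = x + K * (y[k] - x)    # update state with the innovation
        p = (1.0 - K) * p         # update error covariance
        x_hat[k] = x
    return x_hat
```

With `q` small relative to `r`, the filter behaves like an adaptive smoother; in the paper's scheme the high- and low-frequency sections would each get their own filter with parameters learned by EM rather than fixed by hand.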
However, to the best of our knowledge, no framework is yet available that can extract retinal lesions from multi-vendor OCT scans and use them for intuitive severity grading of the human retina. To address this gap, we propose a deep retinal analysis and grading framework (RAG-FW). RAG-FW is a hybrid convolutional framework that extracts multiple retinal lesions from OCT scans and uses them for lesion-influenced grading of retinopathy per clinical standards. RAG-FW was rigorously tested on 43,613 scans from five highly complex, publicly available, multi-vendor datasets, where it achieved a mean intersection-over-union score of 0.8055 for extracting retinal lesions and an accuracy of 98.70% for correct severity grading of retinopathy.

This article studies adaptive neural controller design for a class of uncertain multiagent systems described by ordinary differential equations (ODEs) and beams. Three kinds of agent models are considered: beams, nonlinear ODEs, and coupled ODEs and beams. Both the beams and the ODEs contain completely unknown nonlinearities. Moreover, the control signals are assumed to suffer from a class of generalized backlash nonlinearities. First, neural networks (NNs) are adopted to approximate the completely unknown nonlinearities, and new barrier Lyapunov functions are constructed to guarantee the compact-set conditions of the NNs. Second, new adaptive neural proportional-integral (PI)-type controllers are proposed for the networked ODEs and beams. The parameters of the PI controllers are adaptively tuned by NNs, which keeps the system output within a prescribed time-varying constraint. Two illustrative examples demonstrate the advantages of the obtained results.

Convolutional neural networks (CNNs) have proved an effective way to learn spatiotemporal representations for action recognition in videos.
However, most traditional action recognition algorithms do not employ an attention mechanism to focus on the parts of video frames that are relevant to the action. In this article, we propose a novel global and local knowledge-aware attention network to address this challenge. The proposed network incorporates two types of attention mechanism, statistic-based attention (SA) and learning-based attention (LA), to attach higher importance to the crucial elements in each video frame. Because global pooling (GP) models capture global information while attention models focus on significant details, and the two are implicitly complementary, our network adopts a three-stream architecture comprising two attention streams and a GP stream. Each attention stream employs a fusion layer to combine global and local information and produce composite features.
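As a toy illustration of combining a GP stream with an attention stream, the sketch below average-pools per-frame features for the GP stream and weights frames by their feature norm as a crude stand-in for the statistic-based attention. The real SA and LA modules are learned end to end inside the network, so the weighting rule, feature shapes, and concatenation-based fusion here are purely illustrative assumptions.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(z - z.max())
    return e / e.sum()

def fused_video_feature(frames):
    """Toy two-stream fusion for (T, D) per-frame features.
    GP stream: plain average pooling over time.
    Attention stream: frames weighted by a simple statistic
    (their feature norm) as an assumed stand-in for SA; a learned
    module would produce these weights in the actual network."""
    gp = frames.mean(axis=0)                        # GP stream, shape (D,)
    w = softmax(np.linalg.norm(frames, axis=1))     # attention weights, shape (T,)
    att = (w[:, None] * frames).sum(axis=0)         # attention stream, shape (D,)
    return np.concatenate([gp, att])                # fused feature, shape (2D,)
```

The fused vector keeps both the global summary and the attention-emphasized details, mirroring the complementary roles the three-stream architecture is designed to exploit.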
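The mean intersection-over-union score reported for RAG-FW's lesion extraction earlier is the standard segmentation metric; a minimal version over integer label maps is shown below. How the paper handles absent classes or averages across scans is not stated, so the skip-empty-classes choice here is an assumption.

```python
import numpy as np

def mean_iou(pred, gt, n_classes):
    """Per-class intersection-over-union, averaged over classes.
    `pred` and `gt` are integer label maps of equal shape; classes
    absent from both maps are skipped (an assumed convention)."""
    ious = []
    for c in range(n_classes):
        p, g = pred == c, gt == c
        union = np.logical_or(p, g).sum()
        if union == 0:
            continue  # class absent from both prediction and ground truth
        ious.append(np.logical_and(p, g).sum() / union)
    return float(np.mean(ious))
```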